68 research outputs found

    Pavel A. Florenskij. Organizzazione dello spazio, arte, cultura come unità

    Get PDF
    In the early decades of the 20th century, reflection on the deyatel’ kul’tury (the “maker of culture”) was one of the most important chapters of Russian philosophy, and the conception of space and time proposed by Pavel A. Florenskij – as set out in the treatise Analiz prostranstvennosti i vremeni – is a relevant part of it. Science, philosophy and art find their roots in life, and the study of space is the starting point for understanding them. According to Florenskij, the deyatel’ kul’tury makes culture: he takes on the task of planting boundary poles, outlining the shortest paths through a system of isopotential lines. Through references to the debate among the Symbolists, the Analiz lets us read in a new way the origin of sculpture, theatre, cinema, painting and poetry: the organization of space can reach man’s consciousness, but space already exists, as does the life itself in which art has its roots

    NoXperanto: Crowdsourced Polyglot Persistence

    No full text
    This paper proposes NoXperanto, a novel crowdsourcing approach to querying data collections managed in polyglot persistence settings. The main contribution of NoXperanto is the ability to solve complex queries involving different data stores by exploiting queries from expert users (i.e. a crowd of database administrators, data engineers, domain experts, etc.), assuming that these users can submit meaningful queries. NoXperanto exploits the results of meaningful queries to facilitate forthcoming query answering processes. In particular, query results are used to: (i) help non-expert users work with the multi-database environment and (ii) improve the performance of the multi-database environment, which not only uses disk and memory resources but also relies heavily on network bandwidth. NoXperanto employs a layer that keeps track of the information produced by the crowd, modeled as a Property Graph and managed in a Graph Database Management System (GDBMS)
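    The crowd-knowledge layer described above can be pictured as a property graph that links expert users, the queries they submit, and the data stores those queries span. The following minimal sketch uses networkx in place of a real GDBMS; the node labels, property names and the lookup helper are illustrative assumptions, not the NoXperanto schema.

```python
import networkx as nx

# Illustrative property graph: crowd-submitted queries linked to the data
# stores they read and to the users who wrote them (assumed schema).
g = nx.MultiDiGraph()

g.add_node("u:alice", label="User", role="database administrator")
g.add_node("q:42", label="Query",
           text="match orders with product reviews",
           result_ref="cache://q42")          # pointer to the cached result
g.add_node("s:orders", label="Store", kind="document")
g.add_node("s:reviews", label="Store", kind="graph")

g.add_edge("u:alice", "q:42", type="SUBMITTED")
g.add_edge("q:42", "s:orders", type="READS")
g.add_edge("q:42", "s:reviews", type="READS")

def reusable_queries(graph, store_ids):
    """Yield crowd queries that already span all the given data stores."""
    for node, data in graph.nodes(data=True):
        if data.get("label") != "Query":
            continue
        reads = {t for _, t, d in graph.out_edges(node, data=True)
                 if d.get("type") == "READS"}
        if set(store_ids) <= reads:
            yield node, data["result_ref"]

# A non-expert user asking a cross-store question can be routed to a
# cached expert result instead of planning the polyglot query from scratch.
print(list(reusable_queries(g, ["s:orders", "s:reviews"])))
```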

    False Data Injection Impact on High RES Power Systems with Centralized Voltage Regulation Architecture

    Get PDF
    The increasing penetration of distributed generation (DG) across power distribution networks (DNs) is forcing distribution system operators (DSOs) to improve the voltage regulation capabilities of the system. The increase in power flows due to the installation of renewable plants in unexpected zones of the distribution grid can affect the voltage profile, even causing interruptions at secondary substations (SSs) due to voltage limit violations. At the same time, widespread cyberattacks across critical infrastructure raise new challenges in security and reliability for DSOs. This paper analyzes the impact of false data injection related to residential and non-residential customers on a centralized voltage regulation system, in which the DG is required to adapt its reactive power exchange with the grid according to the voltage profile. The centralized system estimates the distribution grid state from the field data and provides the DG plants with a reactive power request to avoid voltage violations. A preliminary analysis of false data in the context of the energy sector is carried out to design a false data generator algorithm. Afterward, a configurable false data generator is developed and exploited. The false data injection is tested on the IEEE 118-bus system with increasing DG penetration. The impact analysis highlights the need to strengthen the security framework of DSOs to avoid a relevant number of electricity interruptions
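    A configurable false data generator of the kind mentioned above can be pictured as a routine that corrupts a chosen fraction of customer measurements before they reach the centralized regulator. The sketch below is only illustrative: the parameters (fraction of attacked customers, relative bias) and the scaling scheme are assumptions, not the algorithm developed in the paper.

```python
import random

def inject_false_data(readings, fraction=0.1, bias=0.3, seed=None):
    """Corrupt a configurable fraction of power readings (kW).

    readings : dict mapping customer id -> measured active power
    fraction : share of customers whose data is falsified
    bias     : relative magnitude of the injected error (0.3 = +/-30%)
    """
    rng = random.Random(seed)
    n_victims = max(1, int(fraction * len(readings)))
    victims = rng.sample(sorted(readings), k=n_victims)
    corrupted = dict(readings)
    for customer in victims:
        direction = rng.choice((-1.0, 1.0))
        corrupted[customer] = readings[customer] * (1.0 + direction * bias)
    return corrupted, victims

# Placeholder residential load measurements feeding the centralized regulator.
measurements = {"cust_001": 3.2, "cust_002": 5.8, "cust_003": 1.1, "cust_004": 4.4}
fake, attacked = inject_false_data(measurements, fraction=0.5, bias=0.3, seed=7)
print(attacked, fake)
```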

    Keyword based Search over Semantic Data in Polynomial Time

    Get PDF
    In pursuing the development of Yanii, a novel keyword-based search system over graph structures, in this paper we present a computational complexity study of the approach, including a comparative study with current PTIME state-of-the-art solutions. The comparative study focuses on a theoretical analysis of the different frameworks to determine the complexity ranges, within the polynomial time class, to which they correspond. We characterize such systems in terms of general measures, which describe the behavior of these frameworks according to different aspects and are more general and informative than mere benchmark tests on a few test cases. We show that Yanii achieves better performance than the others, confirming itself as a promising approach that deserves further practical investigation and improvement
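    To make the problem setting concrete, the toy sketch below shows one way keyword search over a labelled graph can stay in polynomial time: one breadth-first search per keyword, for an overall cost in O(|keywords| * (|V| + |E|)). It is a generic illustration under assumed data structures, not Yanii's actual algorithm.

```python
from collections import deque

def keyword_search(adj, labels, keywords, max_hops=3):
    """Toy polynomial-time keyword search over a labelled graph.

    adj      : dict node -> iterable of neighbour nodes (undirected)
    labels   : dict node -> set of lower-case terms attached to the node
    keywords : iterable of query terms
    Returns the nodes that reach every keyword within max_hops, i.e.
    candidate roots of an answer subtree.
    """
    dist_to_kw = {}  # node -> {keyword: hops to the nearest matching node}
    for kw in keywords:
        sources = [n for n, terms in labels.items() if kw in terms]
        seen, frontier = set(sources), deque((s, 0) for s in sources)
        while frontier:
            node, d = frontier.popleft()
            dist_to_kw.setdefault(node, {})[kw] = d
            if d == max_hops:
                continue
            for nb in adj.get(node, ()):
                if nb not in seen:
                    seen.add(nb)
                    frontier.append((nb, d + 1))
    wanted = len(set(keywords))
    return [n for n, found in dist_to_kw.items() if len(found) == wanted]

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
labels = {"a": {"florence"}, "b": {"museum"}, "c": {"uffizi"}}
print(keyword_search(adj, labels, ["florence", "uffizi"]))
```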

    CDX-2 expression correlates with clinical outcomes in MSI-H metastatic colorectal cancer patients receiving immune checkpoint inhibitors

    Get PDF
    Immune checkpoint inhibitors (ICIs) have shown efficacy in metastatic colorectal cancer (mCRC) with mismatch-repair deficiency or high microsatellite instability (dMMR-MSI-H). Unfortunately, a subgroup of patients does not benefit from immunotherapy. Caudal-related homeobox transcription factor 2 (CDX-2) seems to influence sensitivity to immunotherapy by promoting the expression of chemokine (C-X-C motif) ligand 14 (CXCL14). Therefore, we investigated the role of CDX-2 as a prognostic-predictive marker in patients with MSI-H mCRC. We retrospectively collected data from 14 MSI-H mCRC patients treated with ICIs between 2019 and 2021. The primary endpoint was the 12-month progression-free survival (PFS) rate. The secondary endpoints were overall survival (OS), PFS, objective response rate (ORR), and disease control rate (DCR). The PFS rate at 12 months was 81% in CDX-2-positive patients vs 0% in CDX-2-negative patients (p = 0.0011). The median PFS was not reached (NR) in the CDX-2-positive group versus 2.07 months (95% CI 2.07-10.8) in CDX-2-negative patients (p = 0.0011). Median OS was NR in CDX-2-positive patients versus 2.17 months (95% confidence interval [CI] 2.17-18.7) in CDX-2-negative patients (p = 0.026). All CDX-2-positive patients achieved a disease response, one of them a complete response. Among CDX-2-negative patients, one achieved stable disease, while the other progressed rapidly (ORR: 100% vs 0%, p = 0.0005; DCR: 100% vs 50%, p = 0.02). Twelve patients received 1st-line pembrolizumab (11 CDX-2 positive and 1 CDX-2 negative), not reaching median PFS, while two patients (1 CDX-2 positive and 1 CDX-2 negative) received 3rd-line pembrolizumab, reaching a median PFS of 10.8 months (95% CI 10.8-12.1; p = 0.036). Although our study reports results on a small population, the prognostic role of CDX-2 in CRC seems confirmed and could indicate a promising predictive role in defining the population most sensitive to immunotherapy. Modulating the CDX-2/CXCL14 axis in CDX-2-negative patients could help overcome primary resistance to immunotherapy
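    Endpoints of this kind (12-month PFS rate, median PFS, between-group p-values) are typically obtained from Kaplan-Meier estimation and a log-rank test. The sketch below shows that style of computation with the lifelines package on invented placeholder durations, not on the study data.

```python
# Illustrative Kaplan-Meier / log-rank computation; the durations below are
# placeholders, not the 14-patient cohort reported in the abstract.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

pfs_months_pos = [4.0, 7.5, 12.0, 15.3, 20.1]   # CDX-2 positive (placeholder)
event_pos      = [0,   1,   0,    0,    0]      # 1 = progression observed
pfs_months_neg = [1.8, 2.1]                     # CDX-2 negative (placeholder)
event_neg      = [1,   1]

km = KaplanMeierFitter()
km.fit(pfs_months_pos, event_observed=event_pos, label="CDX-2 positive")
print("12-month PFS rate:", float(km.survival_function_at_times(12.0).iloc[0]))
print("median PFS (months):", km.median_survival_time_)

res = logrank_test(pfs_months_pos, pfs_months_neg,
                   event_observed_A=event_pos, event_observed_B=event_neg)
print("log-rank p-value:", res.p_value)
```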

    Crossing the finish line faster when paddling the Data Lake with kayak

    No full text
    Paddling in a data lake is strenuous for a data scientist. Since a data lake is a loosely structured collection of raw data with little or no meta-information available, the difficulties of extracting insights start from the initial phases of data analysis. Indeed, data preparation, which involves many complex operations (such as source and feature selection, exploratory analysis, data profiling, and data curation), is a long and involved activity for navigating the lake before reaching precious insights at the finish line. In this setting, we demonstrate kayak, a framework that supports data preparation in a data lake with ad-hoc primitives and allows data scientists to cross the finish line sooner. kayak takes into account the user's tolerance for waiting for the primitives' results and uses incremental execution strategies to produce informative previews of these results. The framework is based on careful management of metadata and on features that limit human intervention, thus scaling smoothly as the data lake evolves
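    The tolerance-aware, incremental execution described above can be pictured as running a profiling primitive on growing samples of a source until the user's time budget is spent, emitting a preview at each step. The sampling scheme, the growth factor and the null-rate primitive below are illustrative assumptions, not kayak's actual primitives.

```python
import random
import time

def incremental_preview(rows, primitive, tolerance_s=2.0, start=100, growth=4):
    """Run `primitive` on growing samples of `rows` until the time budget
    (the user's tolerance, in seconds) is exhausted, yielding each preview."""
    rng = random.Random(0)
    size, deadline = start, time.monotonic() + tolerance_s
    while True:
        sample = rows if size >= len(rows) else rng.sample(rows, size)
        yield len(sample), primitive(sample)
        if size >= len(rows) or time.monotonic() >= deadline:
            break
        size *= growth

# Example primitive: null-rate profiling of one attribute of a lake source.
rows = [{"price": None if i % 7 == 0 else i} for i in range(100_000)]
null_rate = lambda s: sum(r["price"] is None for r in s) / len(s)

for n, preview in incremental_preview(rows, null_rate, tolerance_s=1.0):
    print(f"sample={n:>7}  estimated null rate={preview:.4f}")
```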

    Renewable Energy Data Sources in the Semantic Web with OpenWatt

    No full text
    Although the sector of renewable energies has gained a significant role, companies still encounter considerable barriers to scaling up their business. This is partly due to the way data and information are (wrongly) managed. Often, data is only partially available, noisy, inconsistent, scattered across heterogeneous sources, unstructured, or represented in non-standard and proprietary formats. As a consequence, energy planning tasks are semi-automatic or, in the worst cases, even manual, and the process that uses such data is exceedingly complex, error-prone and ineffective. OpenWatt aims at establishing an ideal scenario in the renewable energy sector where different categories of data are fully integrated and can synergically complement each other. In particular, OpenWatt overcomes the existing drawbacks by introducing the paradigm of Linked Open Data to represent renewable energy data on the (Semantic) Web. With OpenWatt, data increases in quality, tools become interoperable with each other, and the process gains in usability, productivity and efficiency. Moreover, OpenWatt enables and favours the development of new applications and services
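    As an illustration of the Linked Open Data paradigm mentioned above, the sketch below publishes a photovoltaic plant as RDF triples with rdflib. The openwatt.example.org namespace and property names are placeholders, not the actual OpenWatt vocabulary.

```python
# Minimal sketch of describing a renewable plant as Linked Open Data.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

OW  = Namespace("http://openwatt.example.org/schema#")    # placeholder vocabulary
RES = Namespace("http://openwatt.example.org/resource/")  # placeholder resources

g = Graph()
g.bind("ow", OW)

plant = RES["pv-plant-001"]
g.add((plant, RDF.type, OW.PhotovoltaicPlant))
g.add((plant, OW.nominalPowerKw, Literal(250.0, datatype=XSD.decimal)))
g.add((plant, OW.latitude,  Literal(41.9028, datatype=XSD.decimal)))
g.add((plant, OW.longitude, Literal(12.4964, datatype=XSD.decimal)))
g.add((plant, OW.gridOperator, Literal("example DSO")))

# Serializing to Turtle makes the description shareable and linkable on the Web.
print(g.serialize(format="turtle"))
```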

    Augmented Access for Querying and Exploring a Polystore

    No full text

    Random Query Answering with the Crowd

    No full text
    Random data generators play an important role in computer science and engineering, since they aim at simulating reality in IT systems. Software random data generators cannot be reliable enough for critical applications due to their intrinsic determinism, while hardware random data generators are difficult to integrate within applications and are not always affordable. We present an approach that makes use of entropic data sources to perform random data generation. In particular, our approach exploits the chaotic phenomena happening in the crowd. We extract these phenomena from social networks, since they reflect the behavior of the crowd. We have implemented the approach in a database system, RandomDB, to show its efficiency and flexibility compared with competing approaches. We used RandomDB with data taken from Twitter, Facebook and Flickr. The experiments show that these social networks are sources of reliable randomness and that RandomDB is a system that can be used for the task. We hope our experience will drive the development of a series of applications that reuse the same data in several different scenarios
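    One way to picture harvesting entropy from the crowd is to hash the timestamps and text of public posts into a seed for a generator. The sketch below only illustrates that idea with stubbed-in placeholder posts; it is not RandomDB's implementation and makes no claim of cryptographic soundness.

```python
import hashlib
import random

def crowd_seed(posts):
    """Derive a seed by hashing crowd-generated content (timestamps and text
    of public posts); fetching the posts from a social network is stubbed out."""
    h = hashlib.sha256()
    for created_at, text in posts:
        h.update(created_at.encode("utf-8"))
        h.update(text.encode("utf-8"))
    return int.from_bytes(h.digest(), "big")

# Placeholder posts standing in for a live Twitter/Facebook/Flickr stream.
posts = [
    ("2014-05-02T10:31:07Z", "traffic jam on the ring road again"),
    ("2014-05-02T10:31:09Z", "sunset over the harbour #nofilter"),
]

rng = random.Random(crowd_seed(posts))
print([rng.randint(0, 9) for _ in range(10)])
```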